# Multi-task Generalization
**GLM 4 9B 0414 GGUF** (unsloth, MIT)
GLM-4-9B-0414 is a lightweight, 9-billion-parameter member of the GLM family. It excels at mathematical reasoning and general tasks, offering an efficient option for resource-constrained scenarios.
Tags: Large Language Model · Supports Multiple Languages · Stats: 4,291 / 9
**Llama 3.1 MIG Tulu 3 8B SFT** (xsample, Apache-2.0)
A Llama-3.1-8B model fine-tuned on the automatically filtered 50,000-entry Tulu-3-MIG dataset.
Tags: Large Language Model · Transformers · Stats: 26 / 3
**Llama 3.1 8b Medusa V1.01** (Nexesenex)
An 8B-parameter language model based on the Llama 3.1 architecture, created by merging multiple specialized models; strong at text-generation tasks.
Tags: Large Language Model · Transformers · Stats: 95 / 3
**Unhinged Author 70B** (FiditeNemini)
A 70B-parameter model produced with a TIES merge, using Steelskull/L3.3-MS-Nevoria-70b as the base fused with DeepSeek-R1-Distill-Llama-70B.
Tags: Large Language Model · Transformers · Stats: 44 / 3
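As background for the entry above: TIES merges fine-tuned checkpoints by trimming each task vector, electing a per-parameter sign, and averaging only the values that agree with that sign. A minimal NumPy sketch of the procedure for a single weight tensor follows — an illustration of the method, not the exact mergekit implementation used for this model:

```python
import numpy as np

def ties_merge(base, finetuned, density=0.5):
    """Sketch of TIES merging for one weight tensor.

    Steps: (1) compute task vectors (deltas from the base), (2) Trim each
    delta to its top-`density` fraction by magnitude, (3) Elect a sign per
    parameter from the summed trimmed deltas, (4) disjoint-Merge: average
    only the trimmed values whose sign matches the elected one.
    """
    deltas = [ft - base for ft in finetuned]

    # 1) Trim: keep only the largest-magnitude entries of each delta.
    trimmed = []
    for d in deltas:
        k = max(1, int(round(density * d.size)))
        thresh = np.sort(np.abs(d).ravel())[-k]
        trimmed.append(np.where(np.abs(d) >= thresh, d, 0.0))

    # 2) Elect a sign per parameter by summing the trimmed deltas.
    stacked = np.stack(trimmed)
    elected = np.sign(stacked.sum(axis=0))

    # 3) Disjoint merge: mean of the trimmed deltas that agree in sign.
    agree = (np.sign(stacked) == elected) & (stacked != 0)
    counts = np.maximum(agree.sum(axis=0), 1)
    merged_delta = (stacked * agree).sum(axis=0) / counts

    return base + merged_delta
```

Parameters that conflict in sign across checkpoints cancel out rather than being averaged, which is the point of the method: it avoids the interference that plain weight averaging introduces.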
**Llama3.1 Gutenberg Doppel 70B** (nbeerbower)
A large language model based on Hermes-3-Llama-3.1-70B-lorablated, fine-tuned on the Gutenberg dataset.
Tags: Large Language Model · Transformers · Stats: 424 / 6
**LWM** (wi-lab)
LWM is the first foundation model for wireless communications, built as a universal feature extractor that produces fine-grained representations from wireless channel data.
Tags: Physics Model · Transformers · Stats: 137 / 3
**RobustSAM ViT-Large** (jadechoghari, MIT)
RobustSAM segments arbitrary objects in degraded images, improving on SAM with stronger segmentation performance on low-quality inputs.
Tags: Image Segmentation · Transformers · Other · Stats: 86 / 4
**Percival 01 7b Slerp** (AurelPx, Apache-2.0)
Percival_01-7b-slerp is a 7B-parameter model built with LazyMergekit by SLERP-merging liminerity/M7-7b and Gille/StrangeMerges_32-7B-slerp; it ranked second on the Open LLM Leaderboard.
Tags: Large Language Model · Transformers · Stats: 24 / 4
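The SLERP merge named in the entry above interpolates along the great circle between two weight vectors instead of averaging them linearly, which preserves the norm-direction geometry of the checkpoints. A minimal sketch for two flattened tensors of equal shape (mergekit applies this per layer with per-layer interpolation factors; this is only the core formula):

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two flattened weight tensors.

    t=0 returns `a`, t=1 returns `b`; intermediate t follows the arc
    between the two directions rather than the straight chord.
    """
    # Angle between the two vectors (computed on normalized copies).
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    dot = np.clip(np.dot(a_n, b_n), -1.0, 1.0)
    omega = np.arccos(dot)

    if omega < eps:  # nearly parallel: fall back to linear interpolation
        return (1 - t) * a + t * b

    so = np.sin(omega)
    return (np.sin((1 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b
```

Linear interpolation of two distant checkpoints shrinks the result toward the origin; the sine weighting above compensates for that, which is why SLERP is a popular choice for two-model merges.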
**UNA TheBeagle 7b V1** (fblgit)
TheBeagle is a 7-billion-parameter model trained on The Bagel dataset and optimized with DPO (Direct Preference Optimization) and UNA (Unified Neural Architecture), performing well across multi-task scenarios.
Tags: Large Language Model · Transformers · Stats: 88 / 37
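DPO, mentioned in the entry above, optimizes a policy directly on preference pairs by contrasting policy and reference-model log-probabilities, with no separate reward model. A minimal sketch of the per-pair loss (inputs are summed log-probabilities of each full response; `beta` is the usual temperature hyperparameter):

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    pi_* are the policy's summed log-probs of the chosen / rejected
    responses; ref_* are the same quantities under the frozen reference
    model. The loss is -log sigmoid of the beta-scaled log-ratio margin.
    """
    logits = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-logits)))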
**Locutusquexfelladrin TinyMistral248M Instruct** (Locutusque, Apache-2.0)
A merge of Locutusque/TinyMistral-248M-Instruct and Felladrin/TinyMistral-248M-SFT-v4 created with the mergekit tool, combining the strengths of both: programming and reasoning ability with low hallucination and strong instruction following.
Tags: Large Language Model · Transformers · English · Stats: 97 / 7
**Psyfighter 13B** (jebcarter)
A Llama-2-13B-based hybrid incorporating features from several models, including Tiefighter, MedLLaMA, and limarp-v2; suited to a range of text-generation tasks.
Tags: Large Language Model · Transformers · Stats: 86 / 12
**Hh Rlhf Rm Open Llama 3b** (weqweasdas)
A reward model trained with the LMFlow framework on the helpful portion of the HH-RLHF dataset, using open_llama_3b as the base model; it generalizes well.
Tags: Large Language Model · Transformers · Stats: 483 / 18